How To Think About AI: Is It The Tool, Or Are You?

from the do-you-use-your-brain-or-do-you-replace-it? dept

We live in a stupidly polarizing world where nuance is apparently not allowed. Everyone wants you to be for or against something—and nowhere is this more exhausting than with AI. There are those who insist that it’s all bad and there is nothing of value in it. And there are those who think it’s all powerful, the greatest thing ever, and will replace basically every job with AI bots who can work better and faster.

I think both are wrong, but it’s important to understand why.

So let me lay out how I actually think about it. When it’s used properly, as a tool to assist a human being in accomplishing a goal, it can be incredibly powerful and valuable. When it’s used in a way where the human’s input and thinking are replaced, it tends to do very badly.

And that difference matters.

I think back to a post from Cory Doctorow a couple months ago where he tried to make the same point using a different kind of analogy: centaurs and reverse-centaurs.

Start with what a reverse centaur is. In automation theory, a “centaur” is a person who is assisted by a machine. You’re a human head being carried around on a tireless robot body. Driving a car makes you a centaur, and so does using autocomplete.

And obviously, a reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.

Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras, that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, and monitors the driver’s mouth because singing isn’t allowed on the job, and rats the driver out to the boss if they don’t make quota.

The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.

Obviously, it’s nice to be a centaur, and it’s horrible to be a reverse centaur.

As Doctorow notes in his piece, some of the companies embracing AI tech are doing so with the goal of building reverse-centaurs. Those are the ones that people are, quite understandably, uncomfortable with, and they deserve the mockery. But it also seems quite likely those efforts will fail.

And they’ll fail not just because they’re dehumanizing—though they are—but because the output is garbage. Hallucinations, slop, confidently wrong answers: that’s what happens when nobody with actual knowledge is checking whether any of it makes sense. When AI works well, it’s because a human is providing the knowledge and the creativity.

The reverse-centaur doesn’t just burn out the human. It produces worse work, because it assumes that the AI can provide the knowledge or the creativity. It can’t. That requires a human. The power of AI tools is in enabling people to take their own knowledge and creativity and enhance them, to do more with them, based on what they actually want.

To me it’s a simple question of “what’s the tool?” Is it the AI, used thoughtfully by a human to do more than they otherwise could have? If so, that’s a good and potentially positive use of AI. It’s the centaur in Doctorow’s analogy.

Or is the human the tool? Is it a “reverse centaur”? I think nearly all of those are destined to fail.

This is why I tend not to get particularly worked up by those who claim that AI is going to destroy jobs and wipe out the workforce, replacing everyone with bots. It just… doesn’t work that way.

At the same time, I find it ridiculous to see people still claiming that the technology itself is no good and does nothing of value. That’s just empirically false. Plenty of people, including myself, get tremendous use out of the technology. I use it regularly, in all sorts of different ways. It’s been two years since I wrote about how I used it as a first-pass editor.

The tech has gotten dramatically better since then, but the key insight to me is what it takes to make it useful: context is everything. My AI editor doesn’t just get my draft writeup and give me advice based on that and its training—it also has a sampling of the best Techdirt articles, a custom style guide with details about how I write, a deeply customized system prompt (the part of AI tools that is often hidden from public view), and a deeply customized starting prompt. It also often gets the source articles I’m writing about. With all that context, it’s an astoundingly good editor. Sometimes it points out weak arguments I missed entirely. Sometimes it has nothing to say.

(As an aside: while editing this article, it suggested I went on way too long explaining all the context I give it, so I shortened that explanation to just the paragraph above.)

It’s not always right. Its suggestions are not always good. But that’s okay, because I’m not outsourcing my brain to it. It’s a tool. And way more often than not, it pushes me to be a better writer.

This is why I get frustrated every time people point out a single AI fail or hallucination without context.

The problem only comes in when people outsource their brains. When they become reverse centaurs. When they are the tool instead of using AI as the tool. That’s when hallucinations or bad info matter.

But if the human is in control, if they’re using their own brain, if they’re evaluating what the tool is suggesting or recommending and making the final decision, then it can be used wisely and can be incredibly helpful.

And this gets at something most people miss entirely: when they think about AI, they’re still imagining a chatbot. They think every AI tool is ChatGPT. A thing you talk to. A thing that generates text or images for you to copy-paste somewhere else.

That’s increasingly not where the action is. The more powerful shift is toward agentic AI—tools that don’t just generate content, but actually do things. They write code and run it. They browse the web and synthesize what they find. They execute multi-step tasks with minimal hand-holding. This is a fundamentally different model than “ask a chatbot a question and get an answer.”

I’ve been using Claude Code recently, and this distinction matters. It’s an agent that can plan, execute, and iterate on actual software projects, rather than just a tool talking to me about what to do. But, again, that doesn’t mean I just outsource my brain to it.

I often put Claude Code into plan mode, where it works out a plan, and then I spend quite a lot of time exploring why it made certain decisions, asking it to walk through the pros and cons of those decisions, and even asking it to provide alternative sources so I can understand the trade-offs of what it’s recommending. That back and forth has been educational for me, and it also leaves me with a better understanding of, and more comfort with, the projects I eventually use Claude Code to build.

I am using it as a tool, and part of that is making sure I understand what it’s doing. I am not outsourcing my brain to it. I am using it, carefully, to do things that I simply could not have done before.

And that’s powerful and valuable.

Yes, there are many bad uses of AI tools. And yes, there is a concerted, industrial-scale effort to convince the public they need to use AI in ways that they probably shouldn’t, or in ways that are actively harmful. And yes, there are real questions about what it costs to train and run the foundation models. We should discuss those issues and call them out for what they are.

But the people who insist the tools are useless and provide nothing of value are just wrong. Similarly, anyone who thinks the tech is going to go away is entirely wrong. There likely is a funding bubble, and some companies will absolutely suffer as it deflates. But that won’t make the tech go away.

When used properly, it’s just too useful.

As Cory notes in his centaur piece, AI can absolutely help you do your job, but the industry’s entire focus is on convincing people it can replace your job. That’s the con. The tech doesn’t replace people. But it can make them dramatically more capable—if they stay in the driver’s seat.

The key to understanding the good and the bad of the AI hype is understanding that distinction. Cory explains this in reference to AI coding:

Think of AI software generation: there are plenty of coders who love using AI, and almost without exception, they are senior, experienced coders, who get to decide how they will use these tools. For example, you might ask the AI to generate a set of CSS files to faithfully render a web-page across multiple versions of multiple browsers. This is a notoriously fiddly thing to do, and it’s pretty easy to verify if the code works – just eyeball it in a bunch of browsers. Or maybe the coder has a single data file they need to import and they don’t want to write a whole utility to convert it.

Tasks like these can genuinely make coders more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it’s clear they’re not looking to make some centaurs.

They want to fire a lot of tech workers – they’ve fired 500,000 over the past three years – and make the rest pick up their work with coding, which is only possible if you let the AI do all the gnarly, creative problem solving, and then you do the most boring, soul-crushing part of the job: reviewing the AIs’ code.

Criticize the hype. Mock the replace-your-workforce promises. Call out the slop factories and the gray goo doomsaying. But don’t mistake the bad uses for the technology itself. When a human stays in control—thinking, evaluating, deciding—it’s a genuinely powerful tool. The important question is just whether you’re using it, or it’s using you.



Comments on “How To Think About AI: Is It The Tool, Or Are You?”

46 Comments
Drew Wilson (user link) says:

This is why I tend not to get particularly worked up by those who claim that AI is going to destroy jobs and wipe out the workforce, who will be replaced by bots. It just… doesn’t work that way.

At the same time, I find it ridiculous to see people still claiming that the technology itself is no good and does nothing of value.

I take a much different approach. I point and laugh at the people honestly believing that AI simply does everything better and getting burned by that terrible decision of just giving that work to AI – letting AI just do everything. I’ve been maintaining a running tally for a while now and that list just keeps growing. It’s very useful for whenever people come out and claim that AI does everything perfectly.

This is, by no means, proclaiming that nuance is not allowed. This is just laughing at morons who suddenly consider themselves experts when they clearly are not. This while clearly demonstrating that you can’t just ‘leave it all to AI’ which, honestly, is not a terrible message to be sending by any means.

Anonymouse says:

Re:

Ultimately, the question isn’t going to be whether or not AI can do something better than a human, it’s going to be whether or not it can do it cheaper. Factories don’t build better furniture than human carpenters; they build cheaper furniture. If AI tools can be trained to do a job at a mediocre level for a much lower cost, it won’t matter that the AIs aren’t nearly as good as humans. Most businesses will happily make that tradeoff to save some money.

This is why people are scared of AI. It’s not that they think it can do their jobs as well as they do. It’s that they don’t think their bosses care that much how well the job is done if it’s done at a really low cost.

Anonymous Coward says:

Like every tool it takes time for people to understand its strengths and weaknesses and how it works best for them. First we had computers with word processing and spreadsheets and some people embraced them and others shunned them. Then the Internet came along and the same thing happened. Now it is inconceivable that you could work without any of those tools.

AI tools are constantly evolving, and early versions were quickly embraced by people who churn out lots of words, like teachers and bureaucrats. Some of us found the slop objectionable, but no more so than what those teachers and bureaucrats already turned out. The current generation, which generates content based on both its training and the specific documents you feed it, is really useful.

Anonymous Coward says:

it’s pretty easy to verify if the code works

But tiresome to check if the code is actually valid. Because nobody will spend hours building on clumsy foundations if a single change can mess with the whole.
Same goes when repairing a car or cooking a meal: does it look good? Who cares, it needs to be good, mostly because nobody cares how or what you’ve done.

Tasks like these can […] give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles.

Solving abstract puzzles is not “fun”, and never is when working on real things. But you may get some satisfaction (“satisfying” is certainly the correct word instead of “fun”) when completing a complex project, even with help from some tool or people. Sure, you’re proud of it and it’s been a great pleasure. But it still must be good.

Same goes with this article:
* You’ve let AI do the job and just “vibe-fix” some details? Fine, but is the article good?
* You’ve manually written it down in one shot, Hemingway-style, in 30 minutes? Fine, but is the article good?
* You refused any help from anyone or any tool, spent 2 hours, and re-read it ten times? Fine, but is the article good?
* You’ve asked for help many times, to every person and tool possible, it took you a whole day, and you are sure it couldn’t be any better? Fine, but is the article good?

Anonymous Coward says:

This is why I tend not to get particularly worked up by those who claim that AI is going to destroy jobs and wipe out the workforce

This could still come true, just not in the way you think: if the AI bubble bursts, it could trigger an economic crash that destroys millions of jobs.

Meanwhile, there’s another technological revolution going on, that many people ignore: renewable energy. Wind and solar have become cheaper than fossil, are getting cheaper still, and innovation isn’t stopping. It’s an energy source that cannot be hoarded or monopolized, and that will greatly alter geopolitics. Many economic activities are energy-constrained, so the price drop will shift the balance of which projects are feasible.

People getting worked up about AI are focusing on the wrong thing.

Drew Wilson (user link) says:

Re:

I’m personally stoked over the idea of renewable energy being the main source of energy. No more fighting over patches of oil when you can just build another solar panel in your own country instead. It won’t solve the shortages of everything (there will still be various minerals that are going to still be mined to build all of this up), but I think a number of problems will get at least partially solved once the energy transition happens. This isn’t even getting into the obvious positive impacts it will have on the problem of climate change.

Anonymous Coward says:

Here is the massive problem with your framing.

It’s not a single issue or failure with AI. It’s all of them.

Is your code project worth nudes of your wife or kids?

Is it worth the deep fakes, voice theft, and other scams?

Is it worth the spam of slop?

Is it worth you not owning computer hardware anymore?

Is it worth more drought and higher energy costs?

Remember the next time you generate code, someone else is generating a nude picture.

And sure. You can use it in good ways. However the vast majority of usage is bad. The design, advertising, etc all lend towards encouraging trash.

Sit back and ask yourself if it is truly good, why are the majority of people against it and why do you need to constantly try convincing everyone that it isn’t bad. People didn’t have to be forced into adopting cell phones, computers, or cars.

ECA (profile) says:

Automation or Auto Intelligence

Automation is generally Simple and complex. But its repetitive and Very little changes.
Its the Robot Wielder that does Most of the Simple things on a car being Built, While the Human has to do the Finish points the Robot can get to, as well as the Inspections.
AI, is expecting to Much of doing a Simple task. Lets load up this Huge brain into the computer and let it decide What to do? Does not work. The Human went thru training, and the AI should also, but you can record it 1 time and then Place the training into Every AI.
When there are Changes, and Adaption that has to be Done on the Fly. You have an AI, that has to Figure something out. The Human brain has ‘Supposedly’ had Life experiences to fall back on to Figure things out. But how Small of an AI do you need? MOST AI is NOT done on your Phone. Disconnect the net from your phone, and Ask your phone to do something Easy, Like Play music. Nothing.

Then if we go for Glory, and the Full Monty of AI, What will we get over What DO Corps NEED. Corps dont need Something Smarter then they are, trying to tell them WHAT they should/should NOT do.
If you want a real Self thinking and developing AI that can Tall you FACTS and Maybe TEACH you something. Corps dont NEED that. They Need training for Simple tasks to Idiots. They need phone programs to Call and make Sales. They need a Smart phone system that can Direct you to hell and force you to like it.
The Glory that is being sought is for the Glory of Computing. Basic Wikipedia Ideal with Truth and facts. Its not there to REPLACE much of anything, but it Might have the Program stored someplace And make it Fit what is needed.
But, If we are going to let Every one and every nation Input Garbage, can you guess what we will get back?
GIGO

Adrian Lopez (profile) says:

Agentic AI

“The more powerful shift is toward agentic AI—tools that don’t just generate content, but actually do things.”

And this is the part that worries me most about where AI is headed. I believe the ability to just do things without a human veto and with perhaps minor human involvement will prove a big problem.

People have lost the contents of their entire hard drives to agentic AI because of commands being run in a non-sandboxed environment. And the consequences could be even more serious: critical vulnerabilities will be introduced into software in spite of human reviewers’ best efforts. People will be denied important things like jobs, loans, and medical procedures because of agentic AI. Companies and organizations will send out false information, or forward confidential information to the wrong people, because of agentic AI. And I’m sure there are plenty of scenarios that haven’t occurred to me.

With unprecedented power often comes unprecedented danger, and my experience with AI suggests it makes enough mistakes that are often subtle enough to be really dangerous (without exaggeration). I am genuinely worried.

ECA (profile) says:

Re: Fun part?

IF’ they create enough heat, they can generate Much of their own power, IF THEY THINK ABOUT THAT.
Alaska town had a Small hot spot of water, about 80-100 degrees. Everyone said it wouldnt work. They did it anyway. They installed a Generator to handle the Water pressure to turn a small Turbine. They found they only needed a 40 degree differential to Turn the Turbine. Winter is now powered at that small town.
If they Pipe that heat out and control it to Turn a few Turbines.
Understand computers Love a simple number. Not F’ Use Celsius. A Temp of 25-30 and the Computers raising the temp to 60-80, COULD power most of those systems. AND any residual heat could be Piped into Towns.
Its a funny idea that Steam wasnt used for HEATING, until some Big company Built a Steam plant, to Take Excess heat from Metal production and other Manufacturing.

Anonymous Coward says:

How many people are really putting in the time and level of effort you described? How many people are willing to? How many even know they should?

And: how many of the people who do know and are doing it do you think will always do it?
Will you or anyone else using it right really never cut any corners, however tiny… and then what’s a little more. And maybe a little more. It can do some more for me, I can just trust its output, I trained it so well.

It’s okay. I’m not outsourcing my brain.

Yet?

AIDAN, THOMASON says:

It’s hard not to believe AI will make daily life more efficient and decision-making faster, that is, if you keep your own wits about you. Blindly using AI down the line will lead to weaker critical thinking skills, I believe. Even now, people use it to think for them, gladly accepting the response. I know you briefly mentioned this idea before, and it is important to always think about. Now, what can we do to make sure it works alongside us, and not for us?

Arianity (profile) says:

When it’s used properly, as a tool to assist a human being in accomplishing a goal, it can be incredibly powerful and valuable. When it’s used in a way where the human’s input and thinking are replaced, it tends to do very badly.

I’d like to reframe this a bit- how does the Amazon driver make sure AI is used properly? I see kind of 3 main concerns:

Problem #1: Many jobs don’t require much creativity. The Amazon driver (or warehouse worker, driving still has some edge cases) is an example of this. We also see it with art, where the issue isn’t so much replacing Art (expressive), but it is good enough to replace the copy that pays the bills. We tend to be a bit self-centered, but jobs like writing is actually kind of the exception.

Problem #2: Working with AI requires you to move up the abstraction chain. But I’m not sure everyone can do that? You can’t expect everyone to be the equivalent of a PhD. Even among educated people, not every programmer is good at product design or management.

Problem #3: Even if everyone can do #2, it’s not clear there is demand for it. With the coding example: let’s say you’re supervising agents, etc. How many human coders’ worth of output are you replacing? 10 people? Because if there’s still one person at the helm, yeah it’s ‘human oversight’, but you just effectively increased labor supply by 10x. This one may be solved by induced demand, but we don’t know.

Note that none of those require replacing human creativity or anything like that. You’re recognizing it’s not at the extremes of useless or all powerful, but I think you’re underestimating how disruptive “just a tool” can be. Your two personal use cases are kind of the rosier cases, they don’t have things like the labor dynamics that are problematic. In particular, I think people fall into a trap of treating it as being 1:1- you still need a human in the loop. But that misses that one manager can replace multiple humans.

Dan says:

A relevant article from Harvard Business Review

If you haven’t seen this, it’s interesting. They reinforce some of your points but also show the dangers that come from people being convinced AI should be used to, as you put it, “do things that I simply could not have done before.”

There’s a difference between using AI tools to speed up your workflow and using them to reconfigure and silo your workflow by cutting out other humans.

https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it

Anonymous Coward says:

Mike, you might be right. But do you have to save Democracy and AI at the same time? I suggest dividing your energies this way is unwise. Further, the enemies of Democracy are emboldened by any support for AI. So while you have and make good points, can they wait? Just don’t cover AI so much for a while. People will be more open to your ideas when they aren’t being rounded up house to house.
